PC Week SQL Database Server Benchmark Description Revision History
January 2000, Version 2.7
• Scripts revised to work under Red Hat Linux 6.1.
July 1999, Version 2.61 (for PC Magazine database review)
• Extra implementation code and instructions in preparation for Web posting.
April 1999, Version 2.6 (for PC Magazine database review)
• Added the requirement that vendors must agree in writing to having benchmark numbers published before they receive Part II of the specification.
• Added a comment that we will watch CPU utilization during the Online Reporting (Section 4.4.2) and Online Backup (Section 4.4.3) tests.
• In the Online Reporting Test, we will run the reporting mix script multiple times in a row to ensure it runs long enough to show up in our OLTP measurements.
• In the Update Statistics Test, we will re-run statistics just for the updates and fourmill tables, not the entire database.
• Server and client hardware information updated. We’re using a Hewlett-Packard NetServer LH 4 with 2 x 500MHz Pentium II Xeon CPUs and 512MB of RAM.
• Added the clarification “We will be re-imaging the OS drive from a pre-setup master and reformatting all other drives between vendors.”
• Modified the list of baseline services to reflect Windows NT Service Pack 4 changes.
• We will not be using any think times. At a 3-second mean think time, the throughput curves were not leveling off.
• Dropped a few of the small user loads to save testing time. The run sequence is now 1, 25, 50, 75, 100, 200 and 300 users (7 iterations). The Online Reporting and Online Backup tests will be run with the OLTP Read/Write Mix Test at 300 users, not 100 users.
• ANSI Isolation Level details changed to “All the write transactions must be written so non-repeatable reads are not possible. Vendors may ensure this by setting the transaction isolation level for these transactions to Repeatable Read or Serializable, or by hinting the transactions so the appropriate selects take update locks. All other transactions can be performed at ANSI Isolation Level 2, Read Committed.”
• Added point 10.11, “All Computations Must Be Done In Real-Time: Any form of query pre-computation or post-execution results caching (through summary tables, materialized views, etc.) is not allowed. Queries must actually be calculated only when they are submitted.”
• Clarified what 5% means in Section 10.14, “Measurement Policy.”
• Added the condition “Products must be delivered in a shrink-wrapped box, and must be currently available, purchasable shipping code.”
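As an aside on the dropped think times: a minimal sketch, in Python and purely illustrative (not part of the benchmark kit), of how a load-generating client might sample a 3-second mean think time from an exponential distribution. A mean of zero models the no-think-time setting adopted above, where clients submit the next request immediately.

```python
import random

random.seed(42)

def think_time(mean_seconds):
    # Exponentially distributed think time; a mean of 0 models the
    # benchmark's no-think-time client loop.
    return random.expovariate(1.0 / mean_seconds) if mean_seconds > 0 else 0.0

samples = [think_time(3.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # should land close to 3.0 seconds
```

With think times removed, each simulated user keeps the server fully loaded, which is why the throughput curves flatten out at far lower user counts.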
March 1999, Version 2.5 (for PC Magazine database review)
• Divided the tests into scalability (performance) and availability tests.
• Modified sizing and run times to fit the smaller database server hardware.
• Test changed to use only dynamic SQL or stored procedures (no C code).
• Windows NT Service Pack 4 to be installed on the server.
• Moved table structure and indexing details to the implementation document; generally shortened the benchmark description document.
March 1998, Version 2.1
• Updated benchmark status in Section 2.
• Exact details given on server hardware and logical drive volume configuration (the main change is that we are using 60 drives instead of 40).
• Windows NT Service Pack 3 will be installed on all machines.
• Strengthened the full-disclosure section slightly.
• Allowed modification of database configuration files between tests, since the mixed workload test now provides a performance measure for OLTP and DSS activity on the same configuration.
• Raw device storage allowed.
• Removed the requirement that a clustered index (an index with data pages stored in its leaf nodes) must be used. Not all vendors support this type of index.
• Defined nullability requirements for data tables.
• Data size increased to 140 million rows, ~14GB of unindexed data.
• Backup will be done to a set of disks instead of a tape array. One vendor felt the tape hardware provided unfair optimization opportunities.
• Changed how the load and import test is measured.
• Only the date (rather than date and time) need be stored in date columns. The column must still occupy 8 bytes of storage (using padding if necessary).
• Added the constraint that no third-party or otherwise separately purchased tools can be used in the benchmark.
• We will capture the output of a single-user run of all queries to disk and verify query result correctness.
• OLTP and Mixed Workload measurement interval increased to 20 minutes.
• Backup and restore test clarified to state that either table- or tablespace-level backup is acceptable and that backups and restores can occur while the database engine is offline.
• Clarified the vendor staff requirement by stating that the same two people do not have to be here for the whole test period.
• Some changes to table constraints and index arrangements (a few more changes will occur as we optimize our mix).
• Removed a few pages of extraneous material to slim down the document.
December 1997, Version 2.0
• The BLOB tests have been dropped in favor of a Mixed Workload Mix that combines simultaneous OLTP- and DSS-type transactions. We felt that since mixed workload testing was one of our goals, we needed a specific test for it. In addition, this substantially reduces the coding burden on vendors, since the BLOB code would have been very different from the rest of the benchmark code.
• We converted the single-user timed-run tests to multi-user transactions-per-second tests, so all our tests operate the same way.
• We will weight DSS query submission frequency in the Ad Hoc DSS Query Mix and Mixed Workload Mix to keep long-running DSS queries from skewing the total desired workload. Since we are no longer measuring total time for any query tests (transactions per second is our only query metric), long DSS queries won’t have nearly the impact they otherwise would.
• OLTP and Mixed Workload Mix test duration lengthened from 10 to 30 minutes. In particular, warm-up time was increased to 10 minutes to allow more time for the cache to fill, and measurement time was increased to 15 minutes to allow more queries to be submitted to the system under test.
• Ad Hoc DSS Mix test duration lengthened to 8 hours as a result of in-house testing of our query set. (We expect some queries will take between one and two hours to complete.)
• Statistics tables must be fully regenerated after indexing completes. This time will be included in the indexing time.
• Added a full-disclosure requirement per vendor request.
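The submission-frequency weighting described above amounts to a weighted random draw over the query set. A minimal Python sketch, where the query names and weights are hypothetical stand-ins for the benchmark's actual mix:

```python
import random

random.seed(7)

# Hypothetical relative submission weights: long-running DSS queries get
# lower weights so they do not dominate the mixed workload.
query_weights = {"q_short_lookup": 10, "q_medium_join": 5, "q_long_scan": 1}

def next_query():
    # Draw the next query to submit, proportionally to its weight.
    names = list(query_weights)
    weights = [query_weights[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

picks = [next_query() for _ in range(16_000)]
```

Over many draws, the short query is submitted roughly ten times as often as the long scan, which keeps the long-running queries from swamping the measured transaction rate.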
October 1997, Version 1.93
• Server RAM increased to 1GB and number of disks increased to 40.
• Clients will be running Windows NT Workstation, not Windows 95. Some clients will be Pentium 166 machines instead of all Pentium Pro 200s.
• Vendors may configure data disks and devices any way they wish. The key issue is that for some databases, device configuration affects the degree of parallelism possible.
• Size of test database doubled to over 6GB as a result of vendor comments and a survey we did of our PC Week Corporate Partner Advisory Board. (A rule of thumb suggested to us was that the database should be 4 x RAM to be sure the cache wasn’t unrealistically effective.)
September 1997, Version 1.92
• We will be performing the Backup and Restore test using a DAT tape array, not the server’s disks. This is a more realistic backup scenario.
August 1997, Version 1.91
• We will check that ACID properties are maintained by verifying that the totals of selected columns match the number of committed and rolled-back transactions.
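A minimal sketch of that style of check, using SQLite in Python as a stand-in for the databases under test (the table, row counts, and rollback pattern are hypothetical, not the benchmark's actual workload):

```python
import sqlite3

# In-memory database with a single tallied column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 0)")
conn.commit()

committed = rolled_back = 0
for i in range(10):
    conn.execute("UPDATE account SET balance = balance + 1 WHERE id = 1")
    if i % 3 == 0:          # arbitrarily roll back every third transaction
        conn.rollback()
        rolled_back += 1
    else:
        conn.commit()
        committed += 1

(total,) = conn.execute("SELECT balance FROM account WHERE id = 1").fetchone()
# Atomicity/durability check: the stored total must reflect only
# committed transactions, never rolled-back ones.
assert total == committed
```

If a rolled-back increment leaked into the stored total, the final assertion would fail, flagging the ACID violation.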
August 1997, Version 1.9
• Original version released for comment.